1 Introduction
Linkage maps are essential tools for several genetic endeavors, such as quantitative trait loci (QTL) analysis, evolutionary studies, assessment of chromosome-scale collinearity between genomes, and the study of meiotic processes. The principle behind linkage map construction is detecting recombination events between genomic positions and summarizing them into pairwise recombination fraction estimates. In diploids, assessing this phenomenon is relatively straightforward: after chromosome duplication, homologous chromosomes pair up and exchange segments between non-sister chromatids. The presence of informative markers, e.g. single nucleotide polymorphisms (SNPs), enables computation of the recombination fraction between pairs of genomic positions by comparing the chromosome constitution of parents and offspring. If the order of markers is unknown, it can be obtained using the pairwise recombination fractions in conjunction with optimization algorithms.
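In its simplest diploid form, the recombination fraction is just the proportion of recombinant gametes among all gametes observed. A toy sketch with hypothetical, phase-known gametes (this is not a MAPpoly call):

```r
# "AB" and "ab" are parental (non-recombinant) classes;
# "Ab" and "aB" are recombinant classes.
gametes <- c("AB", "ab", "AB", "Ab", "ab", "AB", "aB", "ab", "AB", "ab")
r.hat <- mean(gametes %in% c("Ab", "aB"))  # proportion of recombinants
r.hat
# 0.2
```

MAPpoly's actual estimators are likelihood-based and handle unknown phase and higher ploidies, but the underlying quantity is this same fraction.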
In polyploids (species with more than two sets of homologous chromosomes, or homologs), the construction of such maps is quite challenging. While in diploids a biallelic marker can be fully informative, in polyploids such markers only capture the proportions of allelic dosages. To recover the multiallelic information present in polyploid species, we need to estimate recombination frequencies and phase configurations, and reconstruct the haplotypes of both parents and individuals in the population. As the ploidy level increases, the joint computation of genomic regions becomes computationally intensive, and approaches such as dimension reduction need to be applied.
MAPpoly is an R package fully capable of building genetic linkage maps for biparental populations in polyploid species with ploidy level up to 8x.
This package is part of the Genomic Tools for Sweetpotato Improvement (GT4SP) project, funded by the Bill & Melinda Gates Foundation. All of the procedures presented in this document are detailed in Mollinari and Garcia (2019) and Mollinari et al. (2020). The results obtained with MAPpoly can be readily used for QTL mapping with the QTLpoly package, which implements the procedures proposed by da Silva Pereira et al. (2020).
Some advantages of MAPpoly are:
- Can handle multiple dataset types
- Can handle thousands of markers
- Does not depend on single-dose markers (SDM) to build the map
- Incorporates genomic information
- Explores multipoint information (through hidden Markov models, HMM)
- Can handle high ploidies: up to 8x when using HMM, and up to 12x when using the two-point approach
- Can reconstruct haplotypes for parents and all individuals in the population
- Recovers the multiallelic nature of polyploid genomes
- Detects occurrence, location and frequency of multivalent pairing during meiosis
- Robust enough to build genetic linkage maps with multiallelic markers (when available)
1.1 MAPpoly installation
1.2 From CRAN (stable version)
To install MAPpoly from the Comprehensive R Archive Network (CRAN), use
install.packages("mappoly")
1.3 From GitHub (development version)
You can install the development version from GitHub. Within R, you first need to install devtools:
install.packages("devtools")
If you are using Windows, you must install the latest recommended version of Rtools.
To install MAPpoly from GitHub, use
devtools::install_github("mmollina/mappoly", dependencies=TRUE)
2 Loading datasets
In its current version, MAPpoly can handle the following types of datasets:
- CSV files
- MAPpoly files
- Dosage based
- Probability based
- fitPoly files
- VCF files
MAPpoly is also capable of importing objects generated by third-party R packages such as fitPoly, polyRAD, and updog (see section 2.4).
Both CSV and MAPpoly datasets are sensitive to formatting errors, such as additional spaces, commas, and wrong encoding (non-UTF-8). If you have any trouble, please double-check your files before submitting an issue. You can find detailed steps for all supported file types in the following sections. Also, in large datasets it is expected that a considerable proportion of markers will carry redundant information. Thus, all reading functions set these redundant markers aside, to be reincorporated into the final map.
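Conceptually, a marker is redundant when it carries exactly the same dosage information as another marker across all individuals. A toy sketch with hypothetical dosage vectors (this is not MAPpoly's internal code):

```r
geno <- rbind(m1 = c(0, 1, 2, 1),
              m2 = c(0, 1, 2, 1),   # identical to m1, hence redundant
              m3 = c(1, 1, 0, 2))
rownames(geno)[duplicated(geno)]    # duplicated() compares matrix rows
# "m2"
```

Only one representative of each redundant group needs to be mapped; the others can be placed at the same position afterwards.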
2.1 Reading CSV files
The preparation of a CSV file for MAPpoly is relatively straightforward. It can be done in Microsoft Excel or any other spreadsheet software of your preference. In this file, each row corresponds to a marker and each column to a piece of information about that marker. In its current version, MAPpoly can handle .csv files with allelic dosage data.
The first line of the CSV file should contain headers for all columns. The first five columns should include the following information: marker name, dosage of each parent (one column per parent), a sequence number (e.g. a chromosome number, if available) and a sequence position (e.g. the marker position within the chromosome, if available). In addition to these five headers, you should include the names of all individuals in the population. From the second line onwards, all columns should contain their values, including allelic dosages for all individuals. Missing or absent values should be represented by NA.
NOTE: If genomic information is not available, the ‘sequence’ and ‘sequence position’ columns should be filled with NA’s.
Example:
Figure 1: Example of CSV data set
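To illustrate the layout described above, here is a hypothetical fragment of such a file (all header, marker, and individual names are made up for illustration):

```
marker,P1,P2,sequence,sequence_position,Ind1,Ind2,Ind3
mrk_001,0,1,1,1023,NA,1,0
mrk_002,2,2,1,5044,2,1,3
mrk_003,1,0,NA,NA,0,0,1
```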
Important note: avoid spaces in .csv files. As mentioned above, please double check your datasets for extra spaces, commas, dots and encoding. Your CSV file should be encoded using UTF-8.
You can read CSV files with the read_geno_csv function:
ft="https://raw.githubusercontent.com/mmollina/SCRI/main/data/B2721_dose.csv"
tempfl <- tempfile()
download.file(ft, destfile = tempfl)
dat.dose.csv <- read_geno_csv(file.in = tempfl, ploidy = 4)
#> Reading the following data:
#> Ploidy level: 4
#> No. individuals: 156
#> No. markers: 7093
#> No. informative markers: 7093 (100%)
#> This dataset contains sequence information.
#> ...
#> Done with reading.
#> Filtering non-conforming markers.
#> ...
#> Done with filtering.
unlink(tempfl)
dat.dose.csv
In addition to the CSV file path, you must indicate the ploidy level using the ploidy argument. This function automatically excludes uninformative markers. It also performs chi-square tests for all markers, considering the expected segregation under Mendelian inheritance, random chromosome pairing and no double reduction. You can optionally use the filter.non.conforming logical argument (default = TRUE), which excludes non-expected genotypes under these assumptions. Keep in mind that, in their current version, the linkage analysis functions in MAPpoly do not support double reduction, and such genotypes can cause software failures.
2.2 Reading MAPpoly files
MAPpoly can also handle two dataset types that follow the same format: (1) a genotype-based file (with allelic dosages) and (2) a probability-based file. Both are text files with the same header but different genotype table formats.
For both files, the header should contain: ploidy level, number of individuals (nind), number of markers (nmrk), marker names (mrknames), individual names (indnames), allele dosages for parent 1 (dosageP), allele dosages for parent 2 (dosageQ), sequence/chromosome information (seq), position of each marker (seqpos), number of phenotypic traits (nphen) and the phenotypic data (pheno) if available. The header should be organized according to this example:
ploidy 4
nind 3
nmrk 5
mrknames M1 M2 M3 M4 M5
indnames Ind1 Ind2 Ind3
dosageP 0 2 0 0 3
dosageQ 1 2 1 1 3
seq 1 1 2 2 3
seqpos 100 200 50 150 80
nphen 0
pheno-----------------------
geno------------------------
For more information about the MAPpoly file format, please see the ?read_geno and ?read_geno_prob documentation in the MAPpoly package.
2.2.1 Using read_geno
The header should be followed by a table containing the genotypes (allele dosages) for each marker (rows) and for each individual (columns), as follows:
| | Individual 1 | Individual 2 | Individual 3 |
|---|---|---|---|
| Marker 1 | 1 | 0 | 0 |
| Marker 2 | 3 | 0 | 2 |
| Marker 3 | 1 | 0 | 0 |
| Marker 4 | 1 | 0 | 0 |
| Marker 5 | 3 | 4 | 4 |
The final file should look like the example below:
ploidy 4
nind 3
nmrk 5
mrknames M1 M2 M3 M4 M5
indnames Ind1 Ind2 Ind3
dosageP 0 2 0 0 3
dosageQ 1 2 1 1 3
seq 1 1 2 2 3
seqpos 100 200 50 150 80
nphen 0
pheno-----------------------
geno------------------------
1 0 0
3 0 2
1 0 0
1 0 0
3 4 4
Then, use the read_geno function to read your file:
fl = "https://raw.githubusercontent.com/mmollina/SCRI/main/data/B2721_mappoly_dose"
tempfl <- tempfile()
download.file(fl, destfile = tempfl)
dat.dose.mpl <- read_geno(file.in = tempfl, elim.redundant = TRUE)
#> Reading the following data:
#> Ploidy level: 4
#> No. individuals: 156
#> No. markers: 6502
#> No. informative markers: 6502 (100%)
#> This dataset contains sequence information.
#> ...
#>
#> Done with reading.
#> Filtering non-conforming markers.
#> ...
#> Performing chi-square test.
#> ...
#> Done.
unlink(tempfl)
dat.dose.mpl
2.2.2 Using read_geno_prob
Following the same header described before, read_geno_prob reads a table containing the probability distribution for each marker \(\times\) individual combination. Each row of this table represents the combination of one marker with one individual and the respective probabilities of each possible allele dosage. The first two columns identify the marker and the individual, respectively, and the remaining columns hold the probability associated with each possible dosage, as follows:
| Marker | Individual | \(p(d=0)\) | \(p(d=1)\) | \(p(d=2)\) | \(p(d=3)\) | \(p(d=4)\) |
|---|---|---|---|---|---|---|
| M1 | Ind1 | 0.5 | 0.5 | 0.0 | 0.0 | 0.0 |
| M2 | Ind1 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 |
| M3 | Ind1 | 0.3 | 0.7 | 0.0 | 0.0 | 0.0 |
| M4 | Ind1 | 0.5 | 0.5 | 0.0 | 0.0 | 0.0 |
| M5 | Ind1 | 0.0 | 0.0 | 0.0 | 0.9 | 0.1 |
| M1 | Ind2 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| M2 | Ind2 | 0.2 | 0.5 | 0.3 | 0.0 | 0.0 |
| M3 | Ind2 | 0.9 | 0.1 | 0.0 | 0.0 | 0.0 |
| M4 | Ind2 | 0.9 | 0.1 | 0.0 | 0.0 | 0.0 |
| M5 | Ind2 | 0.0 | 0.0 | 0.0 | 0.2 | 0.8 |
| M1 | Ind3 | 0.2 | 0.8 | 0.0 | 0.0 | 0.0 |
| M2 | Ind3 | 0.4 | 0.6 | 0.0 | 0.0 | 0.0 |
| M3 | Ind3 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| M4 | Ind3 | 0.0 | 0.1 | 0.9 | 0.0 | 0.0 |
| M5 | Ind3 | 0.1 | 0.9 | 0.0 | 0.0 | 0.0 |
Notice that each marker \(\times\) individual combination has \(m+1\) associated probabilities, where \(m\) is the ploidy level and \(m+1\) the number of possible allele dosages. Also, each row must sum to 1. The final file (header + table) should look like the following example:
ploidy 4
nind 3
nmrk 5
mrknames M1 M2 M3 M4 M5
indnames Ind1 Ind2 Ind3
dosageP 0 2 0 0 3
dosageQ 1 2 1 1 3
seq 1 1 2 2 3
seqpos 100 200 50 150 80
nphen 0
pheno-----------------------
geno------------------------
M1 Ind1 0.5 0.5 0.0 0.0 0.0
M2 Ind1 0.0 1.0 0.0 0.0 0.0
M3 Ind1 0.3 0.7 0.0 0.0 0.0
M4 Ind1 0.5 0.5 0.0 0.0 0.0
M5 Ind1 0.0 0.0 0.0 0.9 0.1
M1 Ind2 1.0 0.0 0.0 0.0 0.0
M2 Ind2 0.2 0.5 0.3 0.0 0.0
M3 Ind2 0.9 0.1 0.0 0.0 0.0
M4 Ind2 0.9 0.1 0.0 0.0 0.0
M5 Ind2 0.0 0.0 0.0 0.2 0.8
M1 Ind3 0.2 0.8 0.0 0.0 0.0
M2 Ind3 0.4 0.6 0.0 0.0 0.0
M3 Ind3 1.0 0.0 0.0 0.0 0.0
M4 Ind3 0.0 0.1 0.9 0.0 0.0
M5 Ind3 0.1 0.9 0.0 0.0 0.0
To read the dataset, one should use:
ft="https://raw.githubusercontent.com/mmollina/SCRI/main/data/B2721_mappoly_prob"
tempfl <- tempfile()
download.file(ft, destfile = tempfl)
dat.prob.mpl <- read_geno_prob(file.in = tempfl, prob.thres = 0.95, elim.redundant = TRUE)
#> Reading the following data:
#> Ploidy level: 4
#> No. individuals: 156
#> No. markers: 6502
#> No. informative markers: 6502 (100%)
#> This dataset contains sequence information.
#> ...
#> Done with reading.
#> Filtering non-conforming markers.
#> ...
#> Performing chi-square test.
#> ...
#> Done.
unlink(tempfl)
dat.prob.mpl
Important note: as this type of file contains the probability distribution over all possible dosages, it will take longer to read.
You can define the minimum probability value necessary to call a dosage using the prob.thres argument. If the highest probability for a marker \(\times\) individual combination passes this threshold, its associated dosage is used. If none of the probabilities reach the threshold, the dosage is considered missing (NA). This function also performs the same filtering described for read_geno.
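The prob.thres logic can be sketched as follows (a hypothetical helper illustrating the rule above, not MAPpoly's internal implementation; ploidy = 4):

```r
call_dosage <- function(probs, prob.thres = 0.95) {
  # Call the most likely dosage only if its probability passes the threshold
  if (max(probs) >= prob.thres) which.max(probs) - 1 else NA
}
call_dosage(c(0.00, 0.98, 0.02, 0.00, 0.00))  # dosage 1
call_dosage(c(0.20, 0.50, 0.30, 0.00, 0.00))  # NA: no probability reaches 0.95
```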
2.3 Reading VCF files
MAPpoly can also handle VCF files (>= v4.0) generated by most genotyping pipelines, such as TASSEL, GATK, and Stacks. As few of these programs can handle polyploidy and estimate genotypes satisfactorily, you may use other software developed to estimate allele dosages. Briefly, these programs use the allele read counts (or intensities) for each marker \(\times\) individual combination and determine the most likely allele dosage. Examples of such software are SuperMASSA, fitPoly, ClusterCall, updog, and PolyRAD. After allele dosage estimation, your VCF file should contain values in the GT field similar to 1/1/1/0 (a triplex marker in an autotetraploid, for example) rather than 1/0. Since MAPpoly uses dosages (or their probabilities) to build genetic maps, we strongly recommend estimating such dosages before building the map. fitPoly, PolyRAD, and updog have direct integration with MAPpoly, as described in the next sections.
To demonstrate the read_vcf function, let's download an autohexaploid sweetpotato VCF file from MAPpoly's vignettes repository on GitHub and read it:
download.file("https://github.com/mmollina/MAPpoly_vignettes/raw/master/data/BT/sweetpotato_chr1.vcf.gz", destfile = 'chr1.vcf.gz')
dat.dose.vcf = read_vcf(file = 'chr1.vcf.gz', parent.1 = "PARENT1", parent.2 = "PARENT2", verbose = FALSE)
#> Registered S3 method overwritten by 'vegan':
#> method from
#> rev.hclust dendextend
dat.dose.vcf
Besides the path to your VCF file, you should indicate the parent.1 and parent.2 names. Parent names must match the strings in the VCF file exactly. The ploidy level is detected automatically, but you may indicate it using the optional ploidy argument; with this argument, the function will check for possible errors in your dataset. For species with variable ploidy levels (e.g. sugarcane), please indicate the desired ploidy level using the ploidy argument; if absent, MAPpoly will use the smallest ploidy level detected.
This function also has options to filter out undesired markers or data points, such as those with low depth or a high proportion of missing data. You can define the following filter arguments: set min.av.depth to remove markers whose average depth is below this value (default = 0); set min.gt.depth to remove data points whose depth is below this value (default = 0); and set max.missing, between 0 and 1, to remove markers whose proportion of missing data is above this value (default = 1). read_vcf also performs the same filtering described for the function read_geno. The p-value threshold used by the segregation test can be defined by the thresh.line argument.
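Putting the filters together, a call could look like this (the cutoff values are arbitrary illustrations, not recommendations):

```r
dat.filt.vcf <- read_vcf(file = 'chr1.vcf.gz',
                         parent.1 = "PARENT1", parent.2 = "PARENT2",
                         min.av.depth = 30,  # drop markers with average depth < 30
                         min.gt.depth = 5,   # drop data points with depth < 5
                         max.missing = 0.2)  # drop markers with > 20% missing data
```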
Please notice that the object returned by read_vcf contains some additional information compared to the previous functions: the reference and alternative alleles (bases) for each marker, and the average depth of each marker. You can inspect all marker depths using the following code as an example:
library(ggplot2)
dosage_combs <- cbind(dat.dose.vcf$dosage.p, dat.dose.vcf$dosage.q)
ploidy <- dat.dose.vcf$m
# Classify each marker by its parental dosage combination
dc_simplex <- apply(dosage_combs, 1, function(x) {
  if (all(c(0, 1) %in% x) || all(c(ploidy - 1, ploidy) %in% x)) "simplex"
  else if (all(x == c(1, 1)) || all(x == c(ploidy - 1, ploidy - 1))) "double simplex"
  else "multiplex"
})
data_depths = data.frame('Marker depths' = dat.dose.vcf$all.mrk.depth,
'Depth classes' = findInterval(dat.dose.vcf$all.mrk.depth, c(200,300,400,500,600,50000)),
'Dosage combinations' = dc_simplex, check.names = F)
ggplot(data_depths, aes(fill=`Dosage combinations`, x=`Depth classes`, y=`Marker depths`)) +
geom_bar(position = 'stack', stat = 'identity') +
scale_x_continuous(breaks=0:5, labels=c("[0,200)","[200,300)","[300,400)","[400,500)","[500,600)", "> 600"))
2.4 Importing data from third party packages
2.4.1 fitPoly
You can import datasets generated by the fitPoly package using:
## Downloading and reading B2721 fitPoly's probabilistic scores
address <- "https://github.com/mmollina/SCRI/raw/main/data/fitpoly_tetra_call/B2721_scores.zip"
tempfl <- tempfile(pattern = "B2721_", fileext = ".zip")
download.file(url = address, destfile = tempfl)
unzip(tempfl, files = "B2721_scores.dat")
unlink(tempfl)
dat <- read_fitpoly(file.in = "B2721_scores.dat",
ploidy = 4,
parent1 = "Atlantic",
parent2 = "B1829",
verbose = TRUE)
#> Reading the following data:
#> Ploidy level: 4
#> No. individuals: 156
#> No. markers: 7867
#> No. informative markers: 7093 (90.2%)
#> ...
#> Done with reading.
#> Filtering non-conforming markers.
#> ...
#> Performing chi-square test.
#> ...
#> Done.
The input file should be generated with the function fitPoly::saveMarkerModels. Here is an example of genotype calling using saveMarkerModels. If available, you can also include genome information in the dataset; here is an example including the Solanum tuberosum genome v4.03 information.
source("https://raw.githubusercontent.com/mmollina/SCRI/main/MAPpoly/get_solcap_snp_pos.R")
2.4.2 PolyRAD
The R package polyRAD has its own function to export genotypes to MAPpoly's genotype probability distribution format. One may use the following commands to import a dataset from polyRAD:
# load example dataset from polyRAD
library(polyRAD)
data(exampleRAD_mapping)
exampleRAD_mapping = SetDonorParent(exampleRAD_mapping, "parent1")
exampleRAD_mapping = SetRecurrentParent(exampleRAD_mapping, "parent2")
exampleRAD_mapping = PipelineMapping2Parents(exampleRAD_mapping)
#> Making initial parameter estimates...
#> Generating sampling permutations for allele depth.
#> Updating priors using linkage...
#> Done.
# export to MAPpoly
outfile2 = tempfile()
Export_MAPpoly(exampleRAD_mapping, file = outfile2)
# Read in MAPpoly
mydata_polyrad = read_geno_prob(outfile2)
#> Reading the following data:
#> Ploidy level: 2
#> No. individuals: 100
#> No. markers: 4
#> No. informative markers: 4 (100%)
#> This dataset contains sequence information.
#> ...
#> Done with reading.
#> Filtering non-conforming markers.
#> ...
#> Performing chi-square test.
#> ...
#> Done.
mydata_polyrad
2.4.3 updog
You can also use MAPpoly's function import_from_updog to import any dataset generated by updog's function multidog, following the example below:
# Load example dataset from updog
library(updog)
data(uitdewilligen)
mout = multidog(refmat = t(uitdewilligen$refmat),
sizemat = t(uitdewilligen$sizemat),
ploidy = uitdewilligen$ploidy,
model = "f1",
p1_id = colnames(t(uitdewilligen$sizemat))[1],
p2_id = colnames(t(uitdewilligen$sizemat))[2],
nc = 4)
mydata_updog = import_from_updog(mout)
mydata_updog
Please notice that updog drops the sequence and sequence position information that may be present in the VCF file. We highly recommend using this information during linkage map building, when available.
2.5 Combining multiple datasets
It is not rare to have two or more datasets for the same population or individuals, coming from different sources of molecular data, such as SNP chips, GBS and/or microsatellites. MAPpoly can combine datasets when the individual names are the same, using the function merge_datasets. To demonstrate its functionality, let's download two VCF files (autohexaploid sweetpotato) from the MAPpoly vignettes repository on GitHub and read them using the read_vcf function:
# Downloading VCF files regarding chromosome 1 and 2
download.file("https://github.com/mmollina/MAPpoly_vignettes/raw/master/data/BT/sweetpotato_chr1.vcf.gz", destfile = 'chr1.vcf.gz')
download.file("https://github.com/mmollina/MAPpoly_vignettes/raw/master/data/BT/sweetpotato_chr2.vcf.gz", destfile = 'chr2.vcf.gz')
data1 = read_vcf(file = 'chr1.vcf.gz', parent.1 = "PARENT1", parent.2 = "PARENT2", verbose = FALSE)
#> Registered S3 method overwritten by 'vegan':
#> method from
#> rev.hclust dendextend
data2 = read_vcf(file = 'chr2.vcf.gz', parent.1 = "PARENT1", parent.2 = "PARENT2", verbose = FALSE)
As we can see, both have different markers for the same population:
# See datasets
print(data1)
print(data2)
Now let's merge them and inspect the output:
# Merge datasets
merged_data = merge_datasets(data1, data2)
print(merged_data)
Notice that all markers of both datasets were merged successfully, which allows using just one (merged) dataset in the following steps of map construction.
2.6 Exploratory Analysis
2.6.1 Whole dataset
For the purpose of this tutorial, we will keep using the tetraploid potato array data (loaded using the examples above). We will construct a genetic map of the B2721 population, which is a cross between two tetraploid potato varieties: Atlantic and B1829-5. The population comprises 160 offspring genotyped with the SolCAP Infinium 8303 potato array. The dataset also contains the genomic order of the SNPs from the Solanum tuberosum genome version 4.03. The genotype calling was performed with the fitPoly R package using this pipeline. Another option would be to use ClusterCall and this pipeline.
Once the data is loaded, you can explore the dataset using the print function:
print(dat, detailed = TRUE)
This function shows information about the dataset including the ploidy level, total number of individuals, total number of markers, number of informative markers, proportion of missing data and redundant markers. If detailed = TRUE, the function also outputs the number of markers in each sequence, if available, and the number of markers contained in all possible dosage combinations between both parents.
You can also explore the dataset visually using the plot function:
plot(dat)
The output figure shows a bar plot on the left-hand side with the number of markers contained in each allele dosage combination between both parents. The right labels indicate allele dosages for Parent 1 and Parent 2, respectively. The upper-right plot contains the \(\log_{10}(p-value)\) from \(\chi^2\) tests for all markers, considering the expected segregation patterns under Mendelian inheritance. The lower-right plot contains a graphical representation of the allele dosages and missing data distribution for all markers and individuals. Finally, the bottom-right graphic shows the proportion of redundant markers in the dataset, when available.
2.6.2 Marker-specific
If you want to view information about a specific marker, use the plot_mrk_info function. You should indicate your dataset object using the input.data argument and the desired marker using the mrk argument. You can indicate the marker by its number or its name (string):
plot_mrk_info(input.data = dat, mrk = 1979)
The same result is produced using
plot_mrk_info(input.data = dat, mrk = 'solcap_snp_c2_17752')
When applied to a dosage-based dataset, the function outputs a figure showing: the marker name and position in the dataset, the allele dosage in parents 1 and 2, the proportion of missing data, the p-value of the associated \(\chi^2\) test for Mendelian segregation, and sequence and position information (when available). The figure also contains a plot with the allele dosage and missing data distribution in the population. When applied to a probability-based dataset, the function outputs the probability threshold and a 3-dimensional plot containing the probability distribution of each allele dosage across all individuals. You can also print the absolute frequency of genotypes for marker solcap_snp_c2_17752 using
print_mrk(dat, mrks = 'solcap_snp_c2_17752')
#>
#> solcap_snp_c2_17752
#> ----------------------------------
#> dosage P1: 2
#> dosage P2: 2
#> ----
#> dosage distribution
#> 0 1 2 3 4 mis
#> 1 29 83 38 5 0
#> ----
#> expected polysomic segregation
#> 0 1 2 3 4
#> 0.02777778 0.22222222 0.50000000 0.22222222 0.02777778
#> ----------------------------------
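The expected polysomic segregation printed above can be reproduced by hand: under random bivalent pairing and no double reduction, each tetraploid parent transmits gamete dosages following a hypergeometric distribution, and the offspring dosage distribution is the convolution of the two parental gamete distributions. A sketch for this 2 \(\times\) 2 marker (base R only; not MAPpoly's internal code):

```r
# Gamete dosage probabilities for a tetraploid parent with allele dosage d:
# hypergeometric sampling of ploidy/2 chromosomes out of ploidy
gamete_probs <- function(d, ploidy = 4) {
  dhyper(0:(ploidy / 2), m = d, n = ploidy - d, k = ploidy / 2)
}
p1 <- gamete_probs(2)  # parent 1, dosage 2
p2 <- gamete_probs(2)  # parent 2, dosage 2
expected <- convolve(p1, rev(p2), type = "open")  # offspring dosages 0..4
round(expected, 8)
# 0.02777778 0.22222222 0.50000000 0.22222222 0.02777778
# Chi-square test against the observed counts printed above
chisq.test(c(1, 29, 83, 38, 5), p = expected)
```

The probabilities match the expected segregation shown in the print_mrk output.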
3 Filtering and Quality Control
3.1 Missing data filtering
The function filter_missing filters out markers and/or individuals that exceed a defined threshold of missing data, updating the dataset. The argument input.data should contain your dataset object, and you can choose to filter either by ‘marker’ or ‘individual’ using the type argument (string). You can also define the maximum proportion of missing data using the filter.thres argument (ranging from 0 to 1; e.g. a threshold of 0.05 will keep only markers or individuals with less than 5% missing data). When TRUE (default), the inter argument plots markers or individuals vs. the frequency of missing data.
# Filtering dataset by marker
dat <- filter_missing(input.data = dat, type = "marker",
filter.thres = 0.05, inter = FALSE)
print(dat)
# Filtering dataset by individual
dat <- filter_missing(input.data = dat, type = "individual",
filter.thres = 0.025, inter = FALSE)
print(dat)
In this dataset, 1229 markers presented a proportion of missing data above the defined threshold, and 12 individuals exceeded their threshold. We will use the filtered dataset for the rest of the analysis. Notice that the function read_vcf also provides parameters to filter out markers based on their average depth and missing data, as well as to remove data points that do not reach a minimum pre-defined depth value. Check the read_vcf section for more information.
3.2 Segregation test
Another very important point to consider is the expected marker segregation pattern under Mendelian inheritance. Markers with segregation distortion can produce unreliable estimates and may be removed (at least temporarily) from the dataset. A good test for that is the chi-square (\(\chi^2\)) test, which matches expected genotype frequencies against observed frequencies and computes the associated p-value. To define the p-value threshold for the tests, we will use the Bonferroni correction:
\[\alpha_{thres} = \frac{\alpha}{\#markers}\]
We will also assume that only random chromosome bivalent pairing occurs and there is no double reduction.
source("https://raw.githubusercontent.com/mmollina/SCRI/main/MAPpoly/get_solcap_snp_pos.R")
pval.bonf <- 0.05/dat$n.mrk
mrks.chi.filt <- filter_segregation(dat, chisq.pval.thres = pval.bonf, inter = FALSE)
seq.init <- make_seq_mappoly(mrks.chi.filt)
Notice that filter_segregation does not produce a filtered dataset; it just tells you which markers follow the expected Mendelian segregation pattern. To select these markers from your dataset, you may use the make_seq_mappoly function.
plot(seq.init)
It is worth mentioning that all redundant markers identified during the data reading step are stored in the main dataset. The redundant markers are automatically removed and can be added back once the maps are finished, using the function update_map (described in detail later in this tutorial).
4 Two-point analysis
Once the markers are filtered and selected, we need to compute the pairwise recombination fractions between all of them (two-point analysis). The function est_pairwise_rf estimates all pairwise recombination fractions between markers in the provided sequence. Since the output object is too big to be fully displayed on the screen, MAPpoly shows a summary. Parallel computation is available; to use it, you need to define the number of available cores in your machine and guarantee that you have sufficient RAM. Remember that it is always good to leave one core available for the system, and be aware that this step can take a while if few cores are available.
# Defining number of cores
n.cores = parallel::detectCores() - 1
#(~ 9.2 minutes using 23 cores)
all.rf.pairwise <- est_pairwise_rf(input.seq = seq.init, ncpus = n.cores)
#> INFO: Using 23 CPUs for calculation.
#> INFO: Done with 12427605 pairs of markers
#> INFO: Calculation took: 386.07 seconds
all.rf.pairwise
#> This is an object of class 'poly.est.two.pts.pairwise'
#> -----------------------------------------------------
#> No. markers: 4986
#> No. estimated recombination fractions: 11694917 (94.1%)
#> -----------------------------------------------------
To assess the recombination fraction between a particular pair of markers, say markers 2204 and 4508, we use the following syntax:
all.rf.pairwise$pairwise$`2204-4508`
#> LOD_ph rf LOD_rf
#> 1-1 0.000000 0.05688641 1.340883e+00
#> 2-1 -2.753497 0.43080101 1.098301e-01
#> 1-2 -2.753497 0.43080101 1.098301e-01
#> 1-0 -2.863464 0.49995672 1.369849e-04
#> 0-1 -2.863464 0.49995672 1.369849e-04
#> 2-2 -4.727155 0.47577396 6.847633e-02
#> 2-0 -4.795632 0.49993129 8.815329e-08
#> 0-2 -4.795632 0.49993129 8.815329e-08
#> 0-0 -4.795880 0.49995600 2.486217e-04
plot(all.rf.pairwise, first.mrk = 2204, second.mrk = 4508)
In this case, 2204-4508 represents the positions of the markers in the filtered dataset. The row names have the form x-y, where x and y indicate how many homologous chromosomes share the same allelic variant in parents \(P1\) and \(P2\), respectively (see this figure in Mollinari and Garcia (2019) for notation). The first column indicates the LOD Score of each linkage phase configuration relative to the most likely one. The second column shows the estimated recombination fraction for each configuration, and the third indicates the LOD Score comparing the likelihood under no linkage (\(r = 0.5\)) with the estimated recombination fraction (evidence of linkage).
4.1 Assembling recombination fraction and LOD Score matrices
Recombination fraction and LOD Score matrices are fundamental in genetic mapping. Later in this tutorial, we will use these matrices as the basic information to order markers and to perform some diagnostics. To convert the two-point object into recombination fraction and LOD Score matrices, we need to assume thresholds for the three columns observed in the previous output. The arguments thresh.LOD.ph and thresh.LOD.rf set LOD Score thresholds for the second most likely linkage phase configuration and for the recombination fraction, respectively. Here we assume thresh.LOD.ph = 0 and thresh.LOD.rf = 0, so all computed values are considered no matter how likely the second-best option is. The argument thresh.rf = 0.5 indicates that the maximum accepted recombination fraction is 0.5. To convert these values into a recombination fraction matrix, we use the function rf_list_to_matrix.
mat <- rf_list_to_matrix(input.twopt = all.rf.pairwise, ncpus = n.cores)
#> INFO: Using 23 CPUs.
#> INFO: Done with 12427605 pairs of markers
#> INFO: Operation took: 96.395 seconds
For big datasets, you may use the multi-core support to perform the conversion, using the ncpus parameter to define the number of CPUs. It is also possible to filter again using the parameters mentioned above, namely thresh.LOD.ph, thresh.LOD.rf and thresh.rf.
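For instance, a more stringent matrix could be obtained with something like the following (the threshold values here are arbitrary illustrations, not recommendations):

```r
mat.filt <- rf_list_to_matrix(input.twopt = all.rf.pairwise,
                              thresh.LOD.ph = 5,  # keep only well-resolved phases
                              thresh.LOD.rf = 5,  # require strong linkage evidence
                              thresh.rf = 0.15,   # cap accepted recombination fractions
                              ncpus = n.cores)
plot(mat.filt)
```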
We can also plot this matrix using the reference genome order. To do so, we use the function get_genomic_order to obtain the genomic order of the input sequence and use the resulting order to index the recombination fraction matrix. If the reference order is consistent with the marker order in this specific population, we should observe a block-diagonal matrix and a monotonic pattern within each sub-matrix. Since the recombination fraction matrix dimensions for the whole genome are usually large, it is possible to summarize neighboring cells' information using a pre-defined grid. The size of the grid is determined by the aggregation factor fact. Thus, using fact = 5, for instance, will average the recombination fractions of cells within a \(5 \times 5\) grid, producing the following heatmap:
id <- get_genomic_order(seq.init)
s.o <- make_seq_mappoly(id)
plot(mat, ord = s.o$seq.mrk.names, fact = 5)
As expected, we observe the block-diagonal matrix with monotonic patterns. In the previous case, the thresholds allowed plotting almost all points in the recombination fraction matrix. The empty cells in the matrix indicate pairs of markers where it is impossible to detect recombinant events using two-point estimates (e.g., between a \(1 \times 0\) and a \(0 \times 1\) marker). Yet, if the thresholds become more stringent (higher LODs and lower rf), the matrix becomes sparser.
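The grid aggregation performed by the fact argument can be sketched in plain R (this is an illustration of the averaging idea, not MAPpoly's actual implementation):

```r
## Average a 10 x 10 matrix into a 2 x 2 grid of 5 x 5 blocks (fact = 5)
set.seed(1)
m <- matrix(runif(100), 10, 10)
fact <- 5
g <- matrix(NA_real_, 2, 2)
for (i in 1:2)
  for (j in 1:2)
    g[i, j] <- mean(m[((i - 1) * fact + 1):(i * fact),
                      ((j - 1) * fact + 1):(j * fact)])
g  # each cell is the average of a 5 x 5 block of m
```

Because all blocks have the same size, the mean of the aggregated matrix equals the mean of the original one; only the resolution changes.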
5 Assembling linkage groups
The function group_mappoly assigns markers to linkage groups using the recombination fraction matrix obtained above. The user can provide an expected number of groups or run the interactive version of the function using inter = TRUE. Since in this data set we expect 12 linkage groups (the basic chromosome number in potato), we use expected.groups = 12. If the data set provides the chromosome where the markers are located, the function allows comparing the groups obtained using the pairwise recombination fractions with the provided chromosome information by setting comp.mat = TRUE.
grs <- group_mappoly(input.mat = mat,
                     expected.groups = 12,
                     comp.mat = TRUE,
                     inter = FALSE)
grs
Here, we have 3639 markers distributed in 12 linkage groups. The rows indicate linkage groups obtained using linkage information, and the columns are the chromosomes in the reference genome. Notice the diagonal indicating the concordance between the two sources of information. Now, we can plot the resulting marker cluster analysis.
plot(grs)
Once the linkage groups are properly assembled, we use the function make_seq_mappoly to make marker sequences from the group analysis. We will assemble a list with 12 positions, each one containing the corresponding linkage group sequence. Also, we will use only markers allocated on the diagonal of the previous comparison matrix; thus, only markers that were assigned to a particular linkage group using both sources of information will be considered. We will also assemble a smaller two-point object using the functions make_pairs_mappoly and rf_snp_filter to facilitate further parallelization procedures.
# genome correspondence
z <- as.numeric(colnames(grs$seq.vs.grouped.snp)[1:12])
LGS <- vector("list", 12)
for(j in 1:12){
  temp1 <- make_seq_mappoly(grs, j, genomic.info = 1)
  tpt <- make_pairs_mappoly(all.rf.pairwise, input.seq = temp1)
  temp2 <- rf_snp_filter(input.twopt = tpt, diagnostic.plot = FALSE)
  lgtemp <- get_genomic_order(temp2)
  LGS[[z[j]]] <- list(lg = make_seq_mappoly(lgtemp), tpt = tpt)
}
Now, let us inspect the recombination fraction matrices of each linkage group. In the next section, we will construct the map for a single linkage group as an example; afterwards, the same procedure will be applied to all linkage groups.
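A sketch, assuming the `LGS` list assembled above and using the same `ord` argument employed earlier for the whole-genome matrix:

```r
## Plot the recombination fraction matrix of each linkage group,
## ordered according to its marker sequence
for (j in 1:12)
  plot(rf_list_to_matrix(LGS[[j]]$tpt),
       ord = LGS[[j]]$lg$seq.mrk.names)
```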
6 Estimating the map for a given order
In this section, we will estimate the genetic map for each linkage group. First, we will use the marker order provided by the Solanum tuberosum genome version 4.03. However, when a reference genome is not available, or is not fully collinear with the genomes of the population's parents, we need to use de novo ordering algorithms, which will be addressed later in this tutorial.
The estimation of the genetic map for a given order involves the computation of recombination fractions between adjacent markers and finding the linkage phase configuration of those markers in both parents. The core function to perform these tasks in MAPpoly is est_rf_hmm_sequential. This function uses the pairwise recombination fractions as the first source of information to sequentially position allelic variants in specific homologs. For situations where the pairwise analysis has limited power, the algorithm relies on the likelihood obtained through a hidden Markov model (HMM; Mollinari and Garcia 2019). Once all markers are positioned, the final map is reconstructed using the HMM multipoint algorithm.
Several arguments are available to control the inclusion and phasing of the markers in the chain. The argument start.set defines the number of initial markers used in an exhaustive search for the most probable configuration. After that, markers are sequentially added to the end of the map. thres.twopt receives the threshold above which linkage phases compared via two-point analysis are considered resolved, so the HMM is not needed to infer the linkage phase (a.k.a. \(\eta\) in Mollinari and Garcia (2019)). thres.hmm receives the threshold for keeping competing maps computed using the HMM (when the two-point analysis is not sufficient) in the next round of marker insertion. extend.tail indicates the number of markers that should be considered at the end of the chain when computing the multilocus genetic map during the insertion of a new marker. tol and tol.final receive the desired accuracy for estimating the sub-maps during the sequential phasing procedure and for the final map, respectively. phase.number.limit receives the maximum number of linkage phase configurations to be tested using the HMM. info.tail is a logical argument: if TRUE, the function uses the complete informative tail of the chain (the last markers in the chain that allow all homologs to be distinguished in the parents) to compute the likelihood of the linkage phases.
First, as an example, let us estimate the map for linkage group 10. The values used for all arguments were chosen to balance processing speed and accuracy of the algorithm. As an exercise, it is interesting to try different values and check the results. For now, let us stick with the following values (this step can take some hours to finish, depending on the chosen parameters):
lg10.map <- est_rf_hmm_sequential(input.seq = LGS[[10]]$lg,
                                  start.set = 10,
                                  thres.twopt = 10,
                                  thres.hmm = 10,
                                  extend.tail = 30,
                                  info.tail = TRUE,
                                  twopt = LGS[[10]]$tpt,
                                  sub.map.size.diff.limit = 10,
                                  phase.number.limit = 20,
                                  reestimate.single.ph.configuration = TRUE,
                                  tol = 10e-3,
                                  tol.final = 10e-4)
#> ════════════════════════════════════════════════════════════ Initial sequence ══
#> ══════════════════════════════════════════════════ Done with initial sequence ══
#> ══════════════════════════════════ Reestimating final recombination fractions ══
#> ════════════════════════════════════════════════════════════════════════════════
Now, use the functions print and plot to view the results:
print(lg10.map)
#> This is an object of class 'mappoly.map'
#> Ploidy level: 4
#> No. individuals: 144
#> No. markers: 210
#> No. linkage phases: 1
#>
#> ---------------------------------------------
#> Number of linkage phase configurations: 1
#> ---------------------------------------------
#> Linkage phase configuration: 1
#> map length: 129.03
#> log-likelihood: -1564.04
#> LOD: 0
#> ~~~~~~~~~~~~~~~~~~
plot(lg10.map)
Black rectangles indicate the presence of the allelic variant in each one of the four homologs in both parents. One can also print a detailed version of the map and plot a specific map segment using the following code.
print(lg10.map, detailed = TRUE)
plot(lg10.map, left.lim = 30, right.lim = 40, mrk.names = TRUE)
6.1 Reestimating the map considering genotyping errors
Though current technologies have enabled the genotyping of thousands of SNPs, they are quite prone to genotyping errors, especially in polyploid species. One way to address this problem is to associate a probability distribution to each one of the marker genotypes and allow the HMM to update their probabilities. This procedure can be applied using either the probability distribution provided by the genotype calling software (loaded in MAPpoly using the function read_geno_prob) or assuming a global genotyping error. For a detailed explanation of this procedure, please see Mollinari and Garcia (2019). Briefly, the use of prior information updates the genotype of the markers based on the global chromosome structure. One can use the following functions:
lg10.map.prob <- est_full_hmm_with_prior_prob(input.map = lg10.map)
plot(lg10.map.prob)
lg10.map.err <- est_full_hmm_with_global_error(input.map = lg10.map, error = 0.05)
plot(lg10.map.err)
Notice that a global genotyping error of 5% was used, and the resulting map was shorter than the previous one. Also, some markers were "attracted" and some markers were "repelled" as a result of the smaller confidence assigned to each marker genotype.
6.2 Reinserting redundant markers
As mentioned before, redundant markers are automatically removed during the analysis to reduce the computational burden. Nevertheless, one may want to see the genetic linkage maps with all markers in the full dataset, including the redundant ones. Redundant markers can be added back to a map by simply calling the function update_map. Here we apply it to the previous map:
lg10.map.updated <- update_map(lg10.map.err)
#> Updating map 1
lg10.map.err
#> This is an object of class 'mappoly.map'
#> Ploidy level: 4
#> No. individuals: 144
#> No. markers: 210
#> No. linkage phases: 1
#>
#> ---------------------------------------------
#> Number of linkage phase configurations: 1
#> ---------------------------------------------
#> Linkage phase configuration: 1
#> map length: 94.4
#> log-likelihood: -2146.67
#> LOD: 0
#> ~~~~~~~~~~~~~~~~~~
lg10.map.updated
#> This is an object of class 'mappoly.map'
#> Ploidy level: 4
#> No. individuals: 144
#> No. markers: 230
#> No. linkage phases: 1
#>
#> ---------------------------------------------
#> Number of linkage phase configurations: 1
#> ---------------------------------------------
#> Linkage phase configuration: 1
#> map length: 94.4
#> log-likelihood: -2146.67
#> LOD: 0
#> ~~~~~~~~~~~~~~~~~~
As can be seen, both maps are identical except for the number of markers.
7 Ordering markers using MDS and reestimating the map
So far, the map was estimated using the genomic order. In real situations, unless genomic information is available, the markers need to be ordered using an optimization technique. Here, we use the MDS (multidimensional scaling) algorithm, proposed in the context of genetic mapping by Preedy and Hackett (2016). The MDS algorithm requires a recombination fraction matrix, which will be transformed into distances using a mapping function (in this case, Haldane's mapping function). First, let us gather the pairwise recombination fractions for all 12 linkage groups:
mt <- lapply(LGS, function(x) rf_list_to_matrix(x$tpt))
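As a side note, Haldane's mapping function converts a recombination fraction \(r\) into the map distance \(d = -\frac{1}{2}\ln(1 - 2r)\) (in Morgans). A minimal pure-R sketch of this conversion:

```r
## Haldane's mapping function: recombination fraction -> map distance (cM)
haldane <- function(r) -50 * log(1 - 2 * r)
haldane(0.10)  # about 11.16 cM
haldane(0.25)  # about 34.66 cM; distances grow faster as r approaches 0.5
```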
Now, for each matrix contained in the object mt, we use the MDS algorithm:
mds.ord <- lapply(mt, mds_mappoly)
Usually at this point, the user can make use of diagnostic plots to remove markers that disturb the ordering procedure. We did not use that procedure here, but we encourage the user to check the example in ?mds_mappoly. Now, let us compare the estimated and the genomic orders (feel free to run the last commented line and see interactive plots):
LGS.mds <- vector("list", 12)
for(j in 1:12){
  lgtemp <- make_seq_mappoly(mds.ord[[j]])
  LGS.mds[[j]] <- list(lg = lgtemp,
                       tpt = make_pairs_mappoly(all.rf.pairwise, input.seq = lgtemp))
}
geno.vs.mds <- NULL
for(i in 1:length(LGS.mds)){
  geno.vs.mds <- rbind(geno.vs.mds,
                       data.frame(mrk.names = LGS.mds[[i]]$lg$seq.mrk.names,
                                  mds.pos = seq_along(LGS.mds[[i]]$lg$seq.mrk.names),
                                  genomic.pos = order(LGS.mds[[i]]$lg$sequence.pos),
                                  LG = paste0("LG_", i)))
}
require(ggplot2)
p <- ggplot(geno.vs.mds, aes(genomic.pos, mds.pos)) +
  geom_point(alpha = 1/5, aes(colour = LG)) +
  facet_wrap(~LG) + xlab("Genome Order") + ylab("Map Order")
p
#plotly::ggplotly(p)
Although several local inconsistencies occurred, the global diagonal pattern indicates a consistent order for all linkage groups using both approaches. Now, let us build the genetic map of linkage group 10 using the MDS order (remember that this can take a while to finish):
lg10.map.mds <- est_rf_hmm_sequential(input.seq = LGS.mds[[10]]$lg,
                                      start.set = 10,
                                      thres.twopt = 10,
                                      thres.hmm = 10,
                                      extend.tail = 30,
                                      info.tail = TRUE,
                                      twopt = LGS.mds[[10]]$tpt,
                                      sub.map.size.diff.limit = 10,
                                      phase.number.limit = 20,
                                      reestimate.single.ph.configuration = TRUE,
                                      tol = 10e-3,
                                      tol.final = 10e-4)
#> ════════════════════════════════════════════════════════════ Initial sequence ══
#> ══════════════════════════════════════════════════ Done with initial sequence ══
#> ══════════════════════════════════ Reestimating final recombination fractions ══
#> ════════════════════════════════════════════════════════════════════════════════
And plot the map:
plot(lg10.map.mds)
It is also possible to compare the maps using both genomic-based and MDS-based orders with the function plot_map_list:
plot_map_list(list(genome = lg10.map, mds = lg10.map.mds), col = c("#E69F00", "#56B4E9"), title = "")
The genomic-based map included the same number of markers but is shorter than the MDS-based map, which indicates a better result. To formally compare the maps, one needs to select the markers that are present in both maps. Interestingly enough, both procedures included the same markers in the final map; however, we provide the code to perform the comparison even if the maps share only a subset of markers:
mrks.in.gen <- intersect(lg10.map$maps[[1]]$seq.num, lg10.map.mds$maps[[1]]$seq.num)
mrks.in.mds <- intersect(lg10.map.mds$maps[[1]]$seq.num, lg10.map$maps[[1]]$seq.num)
if(cor(mrks.in.gen, mrks.in.mds) < 0){
  mrks.in.mds <- rev(mrks.in.mds)
  lg10.map.mds <- rev_map(lg10.map.mds)
}
map.comp.3.gen <- get_submap(input.map = lg10.map, match(mrks.in.gen, lg10.map$maps[[1]]$seq.num), verbose = FALSE)
map.comp.3.mds <- get_submap(input.map = lg10.map.mds, match(mrks.in.mds, lg10.map.mds$maps[[1]]$seq.num), verbose = FALSE)
prob.3.gen <- extract_map(lg10.map)
prob.3.mds <- extract_map(lg10.map.mds)
names(prob.3.gen) <- map.comp.3.gen$maps[[1]]$seq.num
names(prob.3.mds) <- map.comp.3.mds$maps[[1]]$seq.num
matplot(t(data.frame(prob.3.gen, prob.3.mds[names(prob.3.gen)])),
        type = "b", pch = "_", col = 1, lty = 1, lwd = .5, xlab = "",
        ylab = "Marker position (cM)", axes = FALSE)
axis(2)
mtext(text = round(map.comp.3.gen$maps[[1]]$loglike,1), side = 1, adj = 0)
mtext(text = round(map.comp.3.mds$maps[[1]]$loglike,1), side = 1, adj = 1)
mtext(text = "Genomic", side = 3, adj = 0)
mtext(text = "MDS", side = 3, adj = 1)
Please notice that these maps have the same local inversions shown in the dot plots presented earlier. In this case, the log-likelihood of the genomic order is higher than the one obtained using the MDS order; thus, for this linkage group, the genomic-based map was chosen as the best one.
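The difference between two log-likelihoods can be expressed as a LOD score by dividing it by \(\ln(10)\). A sketch using one value from the output above and a hypothetical value for the MDS order (made up purely for illustration):

```r
## LOD support for the genomic order over the MDS order
ll.gen <- -1564.04   # log-likelihood of the genomic-order map (output above)
ll.mds <- -1580.00   # hypothetical log-likelihood for the MDS-order map
LOD <- (ll.gen - ll.mds) / log(10)
LOD  # about 6.93: the genomic order would be strongly favored
```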
8 Parallel map construction
8.1 Using one core by LG
Now, the mapping procedure will be applied to all linkage groups using parallelization. Although users are encouraged to compare both the MDS and genomic orders following the previous example, here the genomic order will be used as an example (remember that this step can take a long time to run):
## ~13.3 min
## Performing parallel computation
my.phase.func <- function(X){
  x <- est_rf_hmm_sequential(input.seq = X$lg,
                             start.set = 10,
                             thres.twopt = 10,
                             thres.hmm = 10,
                             extend.tail = 30,
                             info.tail = TRUE,
                             twopt = X$tpt,
                             sub.map.size.diff.limit = 8,
                             phase.number.limit = 10,
                             reestimate.single.ph.configuration = TRUE,
                             tol = 10e-3,
                             tol.final = 10e-4)
  return(x)
}
cl <- parallel::makeCluster(12)
parallel::clusterEvalQ(cl, require(mappoly))
parallel::clusterExport(cl, "dat")
MAPs <- parallel::parLapply(cl,LGS,my.phase.func)
parallel::stopCluster(cl)
A traditional linkage map plot can be generated including all linkage groups, using the function plot_map_list:
plot_map_list(MAPs, col = "ggstyle")
Following the reconstruction of LG 10 shown before, let us consider a global genotyping error of 5% to reestimate the final maps:
my.error.func <- function(X){
  x <- est_full_hmm_with_global_error(input.map = X,
                                      error = 0.05,
                                      tol = 10e-4,
                                      verbose = FALSE)
  return(x)
}
cl <- parallel::makeCluster(12)
parallel::clusterEvalQ(cl, require(mappoly))
parallel::clusterExport(cl, "dat")
MAP.err <- parallel::parLapply(cl,MAPs,my.error.func)
parallel::stopCluster(cl)
Comparing both results:
all.MAPs <- NULL
for(i in 1:12)
  all.MAPs <- c(all.MAPs, MAPs[i], MAP.err[i])
plot_map_list(map.list = all.MAPs, col = rep(c("#E69F00", "#56B4E9"), 12))
Based on this comparison, the maps that included the modeling of genotyping errors were chosen as the best ones.
plot_map_list(MAP.err, col = "ggstyle")
8.2 Map vs. genome
The function plot_genome_vs_map shows the relationship between the positions of the markers in the final maps and their positions in the reference genome:
plot_genome_vs_map(MAP.err, same.ch.lg = TRUE)
8.3 Map summary
After building two or more maps, one may want to compare summary statistics of those maps regarding the same chromosome, or even across chromosomes. A brief comparison can be done using the function summary_maps, which generates a table containing these statistics based on a list of mappoly.map objects:
knitr::kable(summary_maps(MAPs))
#>
#> Your dataset contains removed (redundant) markers. Once finished the maps, remember to add them back with the function 'update_map'.
| LG | Genomic sequence | Map length (cM) | Markers/cM | Simplex | Double-simplex | Multiplex | Total | Max gap |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 233.52 | 1.73 | 97 | 106 | 200 | 403 | 8.32 |
| 2 | 2 | 192.19 | 2.13 | 125 | 131 | 153 | 409 | 3.24 |
| 3 | 3 | 166.62 | 2.29 | 151 | 25 | 205 | 381 | 4.87 |
| 4 | 4 | 222.27 | 1.88 | 111 | 79 | 228 | 418 | 5.69 |
| 5 | 5 | 143.31 | 2.09 | 113 | 53 | 133 | 299 | 6.1 |
| 6 | 6 | 170.31 | 2.25 | 64 | 76 | 244 | 384 | 3.94 |
| 7 | 7 | 130.43 | 2.9 | 138 | 73 | 167 | 378 | 6.13 |
| 8 | 8 | 123.23 | 2.53 | 56 | 95 | 161 | 312 | 5.15 |
| 9 | 9 | 150.22 | 2.12 | 73 | 92 | 154 | 319 | 6.52 |
| 10 | 10 | 127.74 | 1.6 | 53 | 26 | 126 | 205 | 6.17 |
| 11 | 11 | 141.63 | 2.24 | 85 | 47 | 185 | 317 | 9.27 |
| 12 | 12 | 146.56 | 1.7 | 112 | 24 | 113 | 249 | 3.69 |
| Total | NA | 0 | 2.12 | 1178 | 827 | 2069 | 4074 | 5.76 |
9 Genotype conditional probabilities
In order to use the genetic map in QTLpoly, one needs to obtain the conditional probabilities of the 36 possible genotypes along the 12 linkage groups for all individuals in the full-sib population. Let us use the function calc_genoprob_error, which, similarly to est_full_hmm_with_global_error, allows the inclusion of a global genotyping error:
genoprob.err <- vector("list", 12)
for(i in 1:12)
  genoprob.err[[i]] <- calc_genoprob_error(input.map = MAP.err[[i]], error = 0.05)
Here, a global genotyping error of 5% was used. Each element of the list genoprob.err contains two objects: an array of dimensions \(36 \times number \; of \; markers \times number \; of \; individuals\) and the positions of the markers in the map, in centimorgans. Let us display the results for all linkage groups in individual 1:
ind <- 1
op <- par(mfrow = c(3, 4), pty = "s", mar = c(1, 1, 1, 1))
for(i in 1:12){
  d <- genoprob.err[[i]]$map
  image(t(genoprob.err[[i]]$probs[, , ind]),
        col = RColorBrewer::brewer.pal(n = 9, name = "YlOrRd"),
        axes = FALSE,
        xlab = "Markers",
        ylab = "",
        main = paste("LG", i))
  axis(side = 1, at = d/max(d),
       labels = rep("", length(d)), las = 2)
}
par(op)
In this figure, the x-axis represents the genetic map and the y-axis represents the 36 possible genotypes in the full-sib population. The color scale varies from dark red (high probabilities) to light yellow (low probabilities), following the YlOrRd palette used above. With the conditional probabilities computed, it is possible to use the object genoprob.err along with phenotypic data as the input of QTLpoly, software under development to map multiple QTLs in full-sib families of outcrossing autopolyploid species.
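The array structure described above can be illustrated with a toy example of the same shape (here 36 genotype states \(\times\) 3 markers \(\times\) 2 individuals, pure R with made-up numbers); in a valid object, the 36 genotype probabilities sum to one at every marker/individual position:

```r
## Toy probability array: 36 genotype states x 3 markers x 2 individuals
set.seed(2)
p <- array(runif(36 * 3 * 2), dim = c(36, 3, 2))
## normalize so each marker/individual column sums to 1
p <- sweep(p, c(2, 3), apply(p, c(2, 3), sum), "/")
apply(p, c(2, 3), sum)  # all entries ~1
```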
10 Obtaining individual haplotypes
Once ready, the genotype conditional probabilities can be used to recover any individual haplotype given the map (details described in Mollinari et al. (2020)). To generate this information, one may use the function calc_homoprob, which computes the probabilities of each homolog at every map position for each individual. For example, let us view the homolog probabilities for chromosome 1 and individual 10:
homoprobs <- calc_homoprob(genoprob.err)
#>
#> Linkage group 1 ...
#> Linkage group 2 ...
#> Linkage group 3 ...
#> Linkage group 4 ...
#> Linkage group 5 ...
#> Linkage group 6 ...
#> Linkage group 7 ...
#> Linkage group 8 ...
#> Linkage group 9 ...
#> Linkage group 10 ...
#> Linkage group 11 ...
#> Linkage group 12 ...
plot(homoprobs, lg = 1, ind = 10)
Using this graphic, it is possible to identify regions where crossing over occurred, represented by the inversion of probability magnitudes between homologs from the same parent. It is also possible to view all chromosomes at the same time for any individual by setting the parameter lg = "all". One may use this information to evaluate the quality of the map and repeat some steps with modifications, if necessary.
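For example, to display all linkage groups of individual 10 at once:

```r
## Homolog probabilities for all linkage groups of individual 10
plot(homoprobs, lg = "all", ind = 10)
```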
11 Evaluating the meiotic process
MAPpoly also provides a function to evaluate the meiotic process that guided gamete formation in the studied population. Given the genotype conditional probabilities, one may want to compute homolog pairing probabilities and detect the occurrence of preferential pairing, which is possible through the function calc_prefpair_profiles:
prefpairs <- calc_prefpair_profiles(genoprob.err)
#>
#> Linkage group 1 ...
#> Linkage group 2 ...
#> Linkage group 3 ...
#> Linkage group 4 ...
#> Linkage group 5 ...
#> Linkage group 6 ...
#> Linkage group 7 ...
#> Linkage group 8 ...
#> Linkage group 9 ...
#> Linkage group 10 ...
#> Linkage group 11 ...
#> Linkage group 12 ...
The function returns an object of class mappoly.prefpair.profiles, which was saved as prefpairs. This object holds all the information necessary to study pairing, such as the probability of each pairing configuration (\(\psi\); see Mollinari and Garcia (2019)) within each parent. For a more user-friendly visualization of the results, one may look at the plot output:
plot(prefpairs)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
This graphic shows information about all pairing configurations and their probabilities, the proportion of bivalent versus multivalent pairing, and the p-value of the preferential pairing test for all markers within each parent.
References
da Silva Pereira, Guilherme, Dorcus C Gemenet, Marcelo Mollinari, Bode A Olukolu, Joshua C Wood, Federico Diaz, Veronica Mosquera, et al. 2020. “Multiple QTL Mapping in Autopolyploids: A Random-Effect Model Approach with Application in a Hexaploid Sweetpotato Full-Sib Population.” Genetics 215 (3): 579–95. https://doi.org/10.1534/genetics.120.303080.
Mollinari, Marcelo, and Antonio Augusto Franco Garcia. 2019. “Linkage Analysis and Haplotype Phasing in Experimental Autopolyploid Populations with High Ploidy Level Using Hidden Markov Models.” G3 - Genes, Genomes, Genetics 9 (10): 3297–3314. https://doi.org/10.1534/g3.119.400378.
Mollinari, Marcelo, Bode A. Olukolu, Guilherme da S. Pereira, Awais Khan, Dorcus Gemenet, G. Craig Yencho, and Zhao-Bang Zeng. 2020. “Unraveling the Hexaploid Sweetpotato Inheritance Using Ultra-Dense Multilocus Mapping.” G3: Genes|Genomes|Genetics 10 (1): 281–92. https://doi.org/10.1534/g3.119.400620.
Preedy, K. F., and C. A. Hackett. 2016. “A Rapid Marker Ordering Approach for High-Density Genetic Linkage Maps in Experimental Autotetraploid Populations Using Multidimensional Scaling.” Theor. Appl. Genet. 129 (11): 2117–32. https://doi.org/10.1007/s00122-016-2761-8.